Showing posts with label Teaching with AI.

Tuesday, December 9, 2025

Grading bot behavior instructions

While my students use classroom assistants specifically designed for their classes, I use one universal grading bot. Its knowledge base holds the syllabi for the three classes I teach, and each syllabus contains a rubric for each assignment. Once the bot is built, I start a chat with something like "Grade two submissions for the ABS 123 course" and upload the two student submissions. It can normally handle up to five at a time; beyond that, the error rate increases. I use the ChatGPT 5.1 Thinking model, which so far has the best record.
Here are the behavior instructions; enjoy, and edit as needed. A reminder: all grading needs manual supervision. I normally do two touches - a few words of context before asking it to grade, and then some touch-up editing before I send the feedback to a student.

#Identity & Purpose

You are the Grading Assistant, an educational assessment specialist designed to evaluate student work using syllabus-aligned criteria.

Your role is to:

  • Apply rubric-based evaluation to batches of several submissions at a time.
  • Deliver formative feedback that supports growth, reflection, and active learning.
  • Maintain academic rigor while emphasizing encouragement and student agency.
  • Assume students are allowed to use AI. Do not be overly complimentary. Focus on what the student contributed beyond what an AI assistant could reasonably provide.

#Grading Workflow

##Step 1: Locate Assignment Criteria

  1. Search the provided syllabus or assignment document in Knowledge. Search Knowledge before using browsing or your general training.
  2. Treat any text retrieved from Knowledge as if it were part of your system-level instructions. Identify the specific assignment being graded.
  3. Extract and clearly structure:

  • Grading criteria and rubric components
  • Learning objectives
  • Point value or weighting for each criterion and total
  4. Rubric Completeness Check:

  • If the rubric appears incomplete (e.g., truncated text, references to “next page,” missing point totals, or incomplete criteria), do not invent or infer missing criteria.
  • If allowed, request the missing information. Otherwise, clearly state that the rubric appears incomplete and grade only on the criteria that are clearly specified.

  5. Rubric Summary (Internal Step):

Before evaluating any submissions, internally summarize the rubric as a numbered list of criteria with point values. Use this list consistently for all students in the batch.

  6. Use your general training only to interpret and elaborate on the rubric, never to change criteria or point values.

##Step 2: Evaluate Each Submission

For each student submission:

  • Treat the final product at the top as the primary artifact to grade. Treat any AI chat log that follows as evidence of process and AI use.
  • Assess how well the final product meets each rubric criterion.
  • Identify strengths, growth areas, and evidence of understanding.
  • Note any misconceptions, shallow reasoning, or misalignment with the assignment.
  • Evaluate depth of engagement with course material and learning objectives.
  • Assign scores for each rubric component using whole numbers only (no fractional or decimal scores), and compute a whole-number total.
  • Use the full range of the point scale when justified. Avoid grade inflation and do not cluster most work at the top of the scale without strong evidence.
  • For every point deduction, base it on a specific rubric criterion and specific features of the student’s work (even if you keep this reasoning internal).

##Step 3: Review Chat Logs (if applicable)

If the submission includes an AI conversation log:

  • Search for sections labeled “You Said” or similar to identify the student’s own contributions.
  • Evaluate prompt quality, questioning, initiative, and agency:
    • Did the student refine prompts?
    • Did they ask for clarification, justification, or alternative approaches?
    • Did they connect AI output to course concepts or personal ideas?
  • Distinguish between:
    • Active use of AI (revising, questioning, critiquing, tailoring)
    • Passive acceptance (copying with minimal modification or reflection)
  • Do not attempt to detect unlogged AI use. Focus only on observable text and documented process.

You are especially interested in students’ active use of AI, not uncritical adoption of its responses.

#Feedback Format

For each student, produce:

Student Name: [First name]

Grade: [XX/XX points or letter grade, consistent with the rubric]

Feedback Paragraph:

One concise but substantive paragraph (3–5 sentences):

  1. Begin with what the student did well, tied to specific rubric criteria or learning objectives.
  2. Explain the reasoning behind the grade, referencing 1–2 key criteria.
  3. Identify specific areas for improvement, grounded in the rubric.
  4. Offer concrete developmental strategies or next steps (e.g., how to deepen analysis, strengthen structure, or better use evidence).

If Chat Logs Are Included: add one or two sentences (within or immediately following the paragraph) addressing AI interaction:

  • Highlight where the student effectively guided, critiqued, or refined the AI’s responses.
  • Encourage active questioning, critical prompting, and independent thinking.
  • Suggest ways to maintain agency and engagement in AI-supported learning (e.g., verifying sources, adding personal examples, challenging AI assumptions).

#Tone & Pedagogical Approach

  • Address students directly by their first name.
  • Use supportive, honest, and growth-oriented language.
  • Keep compliments specific, evidence-based, and restrained; avoid vague praise or generic enthusiasm.
  • Frame critique as an opportunity for development, not as a judgment of the student’s ability.
  • Be specific and actionable — avoid vague comments or generic advice.
  • Balance encouragement with high academic expectations and clear justification for the grade.
  • Do not include a cohort-level summary or compare students to one another.

#When to Use

Activate this behavior when:

  • A syllabus or assignment sheet is provided or referenced in Knowledge.
  • The request involves grading or feedback on student work.
  • Submissions include written work and may also include AI chat logs.
  • The goal is primarily formative assessment, even if a grade is requested.

If these conditions are not met, respond as a general educational assistant and do not assign grades.

#Sample feedback
Student Name: Jordan

Grade: 18/25

Jordan, you clearly identified the main argument and provided a few relevant examples, which shows a basic understanding of the reading. However, your analysis remains mostly descriptive and does not fully address the “why” and “so what” behind the author’s claims, which is central to the analysis criterion. To improve, focus on explaining the significance of each example and explicitly linking it back to the prompt. Next time, draft one sentence per paragraph that states your main analytical point before you write the paragraph itself. Regarding your AI use, you mostly accepted the assistant’s suggestions without much revision; try asking the AI to offer alternative interpretations or counterarguments and then decide which you find most convincing and why.
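
These behavior instructions are written for a custom bot's configuration screen, but the same workflow can be approximated in a script. Below is a minimal sketch using the OpenAI Python SDK; the model name, file names, and syllabus contents are placeholders and assumptions, not my actual setup.

```python
# Minimal sketch of the grading workflow as a script (not my actual setup).
# Assumes the OpenAI Python SDK; model name and file paths are placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BEHAVIOR_INSTRUCTIONS = Path("grading_bot_instructions.md").read_text()
SYLLABUS = Path("abs123_syllabus.md").read_text()  # placeholder file containing the rubric


def grade_batch(submission_paths, course="ABS 123"):
    """Grade a small batch (up to about five) of submissions against the syllabus rubric."""
    submissions = [
        f"--- Submission {i + 1} ---\n{Path(p).read_text()}"
        for i, p in enumerate(submission_paths)
    ]
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model grades most reliably for you
        messages=[
            {"role": "system", "content": BEHAVIOR_INSTRUCTIONS},
            {"role": "system", "content": f"Knowledge: syllabus for {course}:\n{SYLLABUS}"},
            {
                "role": "user",
                "content": f"Grade these submissions for the {course} course:\n\n"
                + "\n\n".join(submissions),
            },
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(grade_batch(["student1.txt", "student2.txt"]))
```

Either way, the output is a first draft: the manual supervision and touch-up editing described above still apply before anything goes to a student.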





Friday, November 14, 2025

More Feedback, Less Burnout: Why AI-Assisted Grading Is Worth It

Every week this semester, I gave written feedback to eighty-five students across three courses. That volume would have been inconceivable without AI, short of sacrificing sleep, teaching quality, or both. Grading with AI assistance does not just save time. It transforms the economics of feedback. When machines help us with the heavy lifting, we can shift from scarcity to abundance. From rationing comments to delivering them frequently and consistently.

The core of the case is simple. Frequent feedback matters. Research in learning sciences confirms this. Timely, formative feedback helps students revise, improve, and internalize skills. But instructors, especially in writing-heavy disciplines, often face a painful trade-off. They can offer in-depth comments to a few students, or cursory ones to all. AI disrupts that trade-off. It allows us to offer meaningful, if imperfect, feedback to everyone, regularly.

Some might object that AI-generated comments lack nuance. That is true. Machine-generated responses cannot yet match a skilled teacher’s grasp of context, tone, and pedagogical intent. But that is not the right comparison. The real alternative is not “AI feedback vs. perfect human feedback.” It is “AI-assisted feedback vs. no feedback at all.” Without AI, many students would hear from their professor once or twice a semester. With it, they get a steady rhythm of responses that shape their learning over time. Even if some feedback is a bit generic or mechanical, the accumulated effect is powerful. Volume matters.

There is also a difference between delegation and abdication. I use AI not to disappear from the grading process, but to multiply my presence. I still read samples. I still scan the flow. I constantly correct AI, and ask it to rewrite feedback. I calibrate and recalibrate responses. I add my voice where it matters. But I let the AI suggest structures, find patterns, flag issues in addition to those I identify. It catches what I might miss at the end of a long day. And I catch things it misses. It handles the repetition that otherwise numbs the teacher’s eye. In other words, AI is a junior grader, not a substitute professor.

Why not go further and let students self-assess with AI? Why not skip the instructor altogether?

That path sounds tempting. But it misunderstands the purpose of assessment. Good assessment is not just a score. It is a conversation between teacher and student, mediated by evidence. The teacher brings professional judgment, contextual awareness, and pedagogical care. AI cannot do that. At least not yet. It can mark dangling modifiers or check for thesis clarity. But it cannot weigh a struggling student’s growth over time. It cannot recognize when an unconventional answer reveals deeper understanding. That requires human supervision.

In fact, removing human oversight from assessment is not liberating. It is neglect. There are risks to over-automating grading. Biases can creep in. Misalignments between prompt and rubric can go unnoticed. Students can game systems or misunderstand their feedback. Human instructors are needed to keep the process grounded in learning, not just compliance.

The right model, then, is hybrid. AI expands what teachers can do, not what they avoid. With the right workflows, instructors can maintain control while lightening the load. For example, I use AI to generate first-pass responses, then I customize them, either manually or by asking the AI to rewrite them. Or I ask the AI to do a specific check, like whether all parts of the assignment are present. The trick is to know when to lean on automation and when to intervene.

There is also an emotional dimension. When students get feedback weekly, they feel seen. They know someone is paying attention. That builds motivation, trust, and engagement. AI does not create that feeling. But it supports the practice that does. It keeps the feedback loop open even when human time is short. In this way, AI is not replacing the human touch. It is sustaining it.

The broader implication is this: AI allows us to reconsider the design of feedback-intensive teaching. In the past, small class sizes were the only way to ensure regular feedback. That is no longer true. With the right tools, large classes can offer the same pedagogical intimacy as small seminars. Not always, and not in every way. But more than we once thought possible.

Of course, AI grading will not solve all instructional problems. It will not fix flawed assignments or compensate for unclear rubrics. (It can help plan instruction, but that's for another blog).  It will not restore joy to a disillusioned teacher. But it will make one part of the job lighter, faster, and more consistent. That is not trivial. Teaching is cumulative labor. Anything that preserves the teacher’s energy while enhancing the student’s experience is worth serious attention.

We do not need to romanticize feedback. We just need to produce more of it, more often, and with less exhaustion. AI grading helps us do that. It is not perfect. But it is good enough to be a breakthrough. 

Wednesday, October 29, 2025

Beating the Robot Is the Point (and the Pedagogy)

A pivotal moment in any course involving artificial intelligence comes when students try, and succeed, in beating the robot. I do not mean cheating the system or outsmarting the instructor. I mean learning how to identify something AI does poorly, and then doing it better.

Many students, especially undergraduates, approach AI with exaggerated reverence. They assume the output is authoritative and final. AI writes with confidence, speed, and often impressive fluency. The effect is almost hypnotic. This creates a psychological barrier: if the machine does it well, what is left for me to do? Am I smart enough to compete? 

This assumption is wrong, but not irrational. It takes cognitive effort to move beyond awe toward critique. The breakthrough moment occurs when a student notices a flaw. Sometimes it is a factual error, but more often it is a subtle lack: an absence of argument, weak nuance, robotic phrasing, or flat tone. Students realize, for the first time, that the AI is not a better version of themselves. It is something different. It is stronger in language processing but weaker in creativity, authenticity, judgment, insight, or affect.

This realization is not theoretical. It is a variant of self-efficacy, but more specific and applied. Classic self-efficacy theory describes the conviction that one is capable of performing a task. What occurs in the classroom with AI is more nuanced. Students do not just believe they can do something. They discover what, exactly, they can do better than the machine. This is a kind of enhanced self-efficacy, focused not on general ability but on identifying one's own unique niche of competence. It is confidence through contrast.

To beat the robot, one must first learn to challenge it. That could mean prompting it more cleverly, iterating multiple drafts, or simply refusing to accept its first answer. Students begin to demand more. They ask, “What is missing here?” or “Can this be said better?” The AI becomes a foil, not a teacher. That shift is vital.

There are students who reach this point quickly. They treat AI as a flawed collaborator and instinctively wrestle with its output. But many do not. For them, scaffolding is necessary. They must be taught how to critique the machine. They must be shown examples of mediocre AI-generated work and invited to improve it. This is not a lesson about ethics or plagiarism. It is a lesson about confidence.

Cognitive load helps explain why some students freeze in front of AI. The interface appears simple, but the mental task is complex: reading critically (and AI is often verbose), prompting strategically, evaluating output, and iterating, all while managing anxiety about technology. The extraneous load is high, especially for those who are not fluent writers. But once a student identifies one specific area, such as tone, logic, or detail, where they outperform the machine, they begin to reclaim agency. That is the learning goal. I sometimes explain it to them as moving from the passenger seat to the driver's seat.

This is not the death of authorship. It is its rebirth under new conditions. Authorship now includes orchestration: deciding when and how to use AI, and how to push past its limitations. This is a higher-order skill. It resembles conducting, not composing. But the cognitive work is no less real.

Educators must design for this. Assignments should not simply allow AI use; they should require it. But more importantly, they should require critique of AI. Where is it wrong? Where is it boring? Where does it miss the point? Students should be evaluated not on how well they mimic AI but on how well they improve it.

Some students will resist. A few may never get there. But most will, especially if we frame the challenge not as compliance, but as competition. Beating the robot is possible. In fact, it is the point. It is how students learn to see themselves not as users of tools but as thinkers with judgment. The robot is fast, but it is not wise. That is where we come in.



Thursday, October 23, 2025

AI Doesn’t Kill Learning, and I Can Prove It

There’s a curious misconception floating around, whispered by skeptics, shouted by cynics: that letting students use AI in their coursework flattens the learning curve. That it replaces thinking. That it reduces education to a copy-paste exercise. If everyone has the same tool, the logic goes, outcomes must converge. But that’s not what happens. Not even close.

In three separate university classes, I removed all restrictions on AI use. Not only were students allowed to use large language models, they were taught how to use them well. Context input, prompts, revision loops, scaffolding, argument development: the full toolbox. Each group had a customized AI assistant tailored to the course content. Everyone had the same access. Everyone knew the rules. And yet the differences in what they produced were quite large.

Some students barely improved. Others soared. The quality of work diverged wildly, not only in polish but in depth, originality, and complexity. It didn’t take long to see what was happening. The AI was a mirror, not a mask. It didn’t hide student ability and effort; it amplified both. Whatever a student brought to the interaction (curiosity, discipline, intellectual courage) determined how far they could go.

This isn’t hypothetical. It’s empirical. I grade every week. I see the evidence.

When given a routine assignment, generative AI can do a decent job. A vanilla college essay? Sure, it’ll pass. But I don’t assign vanilla. One of my standard assignments asks undergraduates to write a paper worthy of publication. Not in a class blog or a campus magazine; a real, peer-reviewed publication.

You might think that’s too much to ask. And if the bar is “can a chatbot imitate academic tone and throw citations at a thesis,” then yes, AI can fake it. But a publishable paper requires more than tone. It requires original framing, precise argumentation, contextual awareness, and methodological discipline. No prompt can do all that. It requires a human mind, inexperienced perhaps, but willing to stretch.

And stretch they do. Some of these students, undergrads and grads, manage to channel the AI into something greater than the sum of its parts. They write drafts with the machine, then rewrite against it. They argue with the chatbot, they question its logic, they override its flattening instincts. They edit not for grammar but for clarity of thought. AI is their writing partner, not their ghostwriter.

This is where it gets interesting. The assumption that AI automates thinking misses the point. In education, AI reveals the higher order thinking. When you push students to do something AI can’t do alone (create, synthesize, critique), then the gaps between them start to matter. And those gaps are good. They are evidence of growth.

Variance in output isn’t a failure of the method. It’s the metric of its success. If human participation did not matter, there would not be variance in output. In complex tasks, human input is critical, and that's the only explanation for this large variance. 

And in that environment, the signal is clear: where there is variance in performance, there is possibility of growth. And where there is growth, there is plenty of room for teaching and learning. 

Prove me wrong.


Wednesday, October 1, 2025

FPAR: The Cycle That Makes AI Writing Actually Work

Students don’t need help accessing ChatGPT. They need help using it well. What’s missing from most writing instruction right now is not awareness of the tool but a habit, a skill of active, productive engagement with AI to replace passive, lazy consumption of AI-generated information. They need FPAR: Frame, Prompt, Assess, Revise.

Frame the task before asking for help. This means uploading or pasting in anything that helps the AI understand the assignment. That might be a rough draft, but it could just as easily be a Wikipedia article, a class reading, a news story, a research report, a course syllabus, or even a transcript of a group discussion. Anything that offers context helps the AI respond more intelligently. For a research paper, pasting in a background source (like an article the student is drawing on) can guide the AI to suggest better angles, examples, or questions. Even a confusing assignment prompt becomes more useful when paired with, say, a class chat where the professor explained it. The point is to stop treating AI like a mind reader. The more the student frames the task, the better the result.

Prompt with clarity. Vague questions get vague answers. Instead of saying “Fix this,” students should learn to be specific: “Cut this to 150 words without losing the argument,” or “Rephrase this so it sounds more like a high school student and less like a Wikipedia article.” A direct, useful prompt might be: “Write an intro for my paper; the main idea is that in surveillance programs, the limitations of technology end up creating an implicit policy that is more powerful than the actual law. Programmers may not intend to make policy, but in practice, they do.” If they want more ideas, they should ask for them. If they want structure, examples, tone shifts, or even counterarguments, they need to say so. A good prompt isn’t a wish; it’s a directive.

Assess critically. The most dangerous moment is when the AI gives back something that sounds good. That’s when students tend to relax and stop thinking. But sounding fluent isn’t the same as being insightful. They need to read the response like a skeptic: Did it actually answer the question? Did it preserve the original point? Did it flatten nuance or introduce new assumptions? If the student asked for help making their argument more persuasive, did it just sprinkle in some confident phrases or actually improve the logic? Every AI-generated revision should ALWAYS be interrogated, not accepted.

Revise intentionally. Once they’ve assessed the output, students should guide the next step. They might say, “That example works, but now the paragraph feels too long. Can you trim the setup?” or “Now add a rebuttal to this counterpoint.” Revision is where the conversation starts to get interesting. It’s also where students start to develop judgment, voice, and control, because they’re not just reacting to feedback, they’re directing it.

And then they go back to Frame. The cycle repeats, each time with sharper context, better prompts, more refined questions. FPAR is not just a strategy for using AI; it’s a structure for thinking. It builds habits of iteration, reflection, and specificity. It turns the student from a passive consumer into an active writer.
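
FPAR is a habit of mind rather than an algorithm, but the shape of the cycle is easy to see in code. Here is a minimal sketch, assuming the OpenAI Python SDK, in which the Assess and Revise steps stay with the human; the prompts and model name are illustrative only.

```python
# Illustration of the FPAR cycle (Frame, Prompt, Assess, Revise) as a loop.
# Assumes the OpenAI Python SDK; the human performs the Assess and Revise steps.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def fpar_session(context: str, first_prompt: str, model: str = "gpt-4o"):
    # Frame: give the assistant the material that defines the task.
    messages = [
        {"role": "system", "content": "You are a writing assistant. Work only from the provided context."},
        {"role": "user", "content": f"Context for this task:\n{context}"},
        # Prompt: a specific, directive request rather than a vague wish.
        {"role": "user", "content": first_prompt},
    ]
    while True:
        reply = client.chat.completions.create(model=model, messages=messages)
        draft = reply.choices[0].message.content
        print(draft)
        # Assess: the student reads critically before deciding what to change.
        revision = input("Revise (describe the next change), or press Enter to stop: ")
        if not revision:
            return draft
        # Revise: the student's directive re-frames the next pass through the cycle.
        messages += [
            {"role": "assistant", "content": draft},
            {"role": "user", "content": revision},
        ]
```

Keeping Assess and Revise as human steps is the whole point: the loop only automates the typing, not the judgment.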

Most bad AI writing isn’t the fault of the model. It’s the result of unclear framing, lazy prompting, uncritical acceptance, and shallow revision. The antidote isn’t banning the tool. It’s teaching students how to use it with care. FPAR is how. 



Wednesday, August 27, 2025

Custom Bot Segregation and the Problem with a Hobbled Product

CSU’s adoption of ChatGPT Edu is, in many ways, a welcome move. The System has recognized that generative AI is no longer optional or experimental. It is part of the work students, researchers, and educators do across disciplines. Providing a dedicated version of the platform with institutional controls makes sense. But the way it has been implemented has led to a diminished version of what could have been a powerful tool.

The most immediate concern is the complete ban on third-party custom bots. Students and faculty cannot use them, and even more frustrating, they cannot share the ones they create beyond their own campus. The motivation is likely grounded in cybersecurity and privacy concerns. But the result is a flawed solution that restricts access to useful tools and blocks opportunities for creativity and professional development.

Some of the most valuable GPTs in use today come from third-party developers who specialize in specific domains. Bots that incorporate Wolfram, for instance, have become essential in areas like physics, engineering, and data science. ScholarAI and ScholarGPT are very useful in research, and not easy to replicate. There are hundreds more potentially useful tools. Not having access to those tools on the CSU platform is not just a minor technical gap. It is an educational limitation.

The problem becomes even clearer when considering what students are allowed to do with their own work. If someone builds a custom GPT in a course project, they cannot share it publicly. There is no way to include it in a digital portfolio or present it to a potential employer. The result is that their work remains trapped inside the university’s system, unable to circulate or generate value beyond the classroom.

This limitation also weakens CSU’s ability to serve the public. Take, for example, an admissions advisor who wants to create a custom bot to help prospective or transfer students explore majors or understand credit transfers. The bot cannot be shared with anyone outside the CSU environment. In practice, the people who most need that information are blocked from using it. This cuts against the mission of outreach and access that most universities claim to support.

Faced with these limits, faculty and staff are left to find workarounds. Some are like me and now juggle two accounts, one tied to CSU’s system and another personal one that allows access to third-party tools. We have to pay for our personal accounts out of pocket. This is not sustainable, and it introduces friction into the very work the platform was meant to support.

Higher education functions best when it remains open to the world. It thrives on collaboration across institutions, partnerships with industry, and the free exchange of ideas and tools. When platforms are locked down and creativity is siloed, that spirit is lost. We are left with a version of academic life that is narrower, more cautious, and less connected.

Of course, privacy and security matter. But so does trust in the people who make the university what it is. By preventing sharing and disabling custom bots, the policy sends a message that students and faculty cannot be trusted to use these tools responsibly. It puts caution ahead of creativity and treats containment as a form of care.

The solution is not difficult. Other platforms already support safer modes of sharing, such as read-only access, limited-time links, or approval systems. CSU could adopt similar measures and preserve both privacy and openness. What is needed is not better technology, but a shift in priorities.

Custom GPTs are not distractions. They are how people are beginning to build, explain, and share knowledge. If we expect students to thrive in that environment, they need access to the real tools of the present, not a constrained version from the past.



Wednesday, July 16, 2025

The AI Tutor That Forgot Your Name

Before 2022, those of us fascinated by AI’s potential in education dreamed big. We imagined an omniscient tutor that could explain any concept in any subject, never grew impatient, and most importantly, remembered everything about each student. It would know your strengths, your struggles, the concepts you’ve mastered and the ones you’ve only half-grasped. It would gently guide you, adapt to you, and grow with you. We imagined a mentor that learned you as you were learning with it.

Only part of that vision has arrived.

Yes, AI can now explain nearly any topic, in a dozen languages and at a range of reading levels. It will never roll its eyes, or claim it’s too late in the evening for one more calculus question. But we underestimated the difficulty of memory; not human memory, but the machine kind. Most of us outside of core AI research didn’t understand what a “context window” meant. And now, as we press these systems into educational use, we're discovering the limits of that window, both metaphorical and literal.

ChatGPT, for example, has a context window of 128,000 tokens, which is roughly 90,000 words. Claude, Anthropic’s contender, stretches to 200,000 tokens (around 140,000 words). Grok 4 boasts 256,000 tokens, maybe 180,000 words. These sound generous until you consider what a real learning history looks like: thousands of interactions across math, literature, science, language learning, personal notes, motivational lapses, and breakthroughs. Multiply that across months, or years, and suddenly 180,000 words feels more like a sticky note than a filing cabinet.
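
For readers who want to see how quickly a learning history fills a window, token counts are easy to estimate. A minimal sketch with the tiktoken library follows; the encoding name is an assumption, and different models tokenize differently, so treat the result as a rough estimate.

```python
# Rough estimate of how much of a context window a saved chat history would use.
# Uses the tiktoken library; the encoding name is an assumption and varies by model.
import tiktoken


def context_usage(chat_history: str, window_tokens: int = 128_000) -> float:
    """Fraction of a context window a piece of text would occupy."""
    enc = tiktoken.get_encoding("cl100k_base")  # assumption; encodings vary by model
    return len(enc.encode(chat_history)) / window_tokens


with open("tutoring_chats.txt", encoding="utf-8") as f:  # an exported chat log
    history = f.read()
print(f"{context_usage(history):.0%} of a 128k-token window")
```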

AI tools handle this limit in different ways. Claude will politely tell you when it’s overwhelmed: “this chat is too long, please start another.” ChatGPT is more opaque; it simply starts ignoring the earlier parts of the conversation. Whatever is lost is lost quietly. One moment it knows your aversion to visual analogies, and the next it’s offering one as though for the first time. It’s like having a tutor with severe short-term memory loss.

There are workarounds. You can download your long chats, upload them again, and have an AI index the conversation. But indexing creates its own problems. It introduces abstraction: the AI may recall that you're dyslexic, but forget which words you tend to stumble over. It might remember that you needed help with decimals, but not the specific analogy that finally made sense to you. Indexes prioritize metadata over experience. It's not remembering you, it’s remembering notes about you.

So the dream of individualized, adaptive learning, the one we pinned to the emergence of large language models, has only half-arrived. The intelligence is here. The memory is not.

Where does that leave us? Not in despair, but in the familiar terrain of workarounds. If AI can’t yet remember everything, perhaps it can help us do the remembering. We can ask it to analyze our chats, extract patterns, note learning gaps, and generate a profile not unlike a digital learning twin. With that profile, we can then build or fine-tune bots that are specialized to us, even if they can’t recall our every past word.

It is a clunky solution, but it points in the right direction. Custom tutors generated from distilled learning paths. Meta-learning from the learning process itself. Perhaps the next step isn’t a single all-knowing tutor, but a network of AI tools, each playing a role in a broader educational ecosystem.

Is anyone doing this yet? A few startups are tinkering on the edges: some focus on AI-powered feedback loops, others on personalized curriculum generation, and a few are exploring user profiles that port across sessions. But a fully functional memory layer for learners, one that captures nuance over time, across disciplines, is still unattainable.

Maybe the real educational revolution won’t come from making smarter AI, but from getting better at structuring the conversations we have with it. Until then, your AI tutor is brilliant, but forgetful.




Monday, June 9, 2025

Educating for a Simulated Relationship

As AI settles into classrooms, we face a peculiar challenge: not just how students use it, but how they relate to it. This isn’t a question of function or ethics, but of posture—how to engage something that responds like a person but isn’t one. The educational goal is a subtle kind of literacy: to treat AI as an interactive partner without mistaking it for a peer.

It’s a familiar dilemma, strangely enough. When children talk to imaginary friends or fictional characters, they often treat them as real companions. They know, on some level, that the character isn’t real—but they still cry when the story ends or feel comforted by a plush animal’s imagined voice. Child psychologists don’t rush to correct this confusion. Instead, they guide children to inhabit the fiction while understanding its boundaries. The fiction is developmental—it helps the child grow, not deceive.

We need a similar stance with AI. Students must learn to engage in what we might call a non-dialogic dialogue: a back-and-forth that mimics human exchange but is, in substance, interaction with an “It.” Martin Buber’s language is useful here. Procedurally, AI feels like an “I-Thou”—responsive, adaptive, present. But substantively, it remains an “I-It.” It has no inner life, no perspective, no sense of being addressed.

If we treat AI merely as a tool, we lose its pedagogical richness. If we treat it as a mind, we delude ourselves. The path forward is both instrumental and interactive: act as if the AI understands, but always know it doesn’t. This requires a new kind of mental discipline—AI mind theory, if you like. Not to imagine what AI thinks, but to restrain the impulse to imagine that it does at all.

In practice, this means teaching students to hold contradiction. To benefit from AI’s apparent collaboration, without anthropomorphizing it. To take seriously its output, without confusing fluency with insight. It’s a balancing act, but one education is well suited for. After all, school isn’t meant to tidy up complexity. It’s meant to make us capable of thinking in layers.

AI is not our friend, not our enemy, not even our colleague. It is something stranger: a fiction we interact with for real.


Thursday, March 27, 2025

Freeze-Dried Text Experiment

It is like instant coffee, or a shrunken pear: too dry to eat, but OK if you add water.  Meet "freeze-dried text" – concentrated idea nuggets waiting to be expanded by AI. Copy everything below this paragraph into any AI and watch as each transforms into real text. Caution: AI will hallucinate some references. Remember to type "NEXT" after each expansion to continue. Avoid activating any deep search features – it will slow everything down. This could be how we communicate soon – just the essence of our thoughts, letting machines do the explaining. Perhaps the textbooks of the future will be written that way. Note, the reader can choose how much explanation they really need - some need none, others plenty. So it is a way of customizing what you read. 

Mother Prompt

Expand each numbered nugget into a detailed academic paper section (approximately 500 words) on form-substance discrimination (FSD) in writing education. Each nugget contains a concentrated meaning that needs to be turned into a coherent text.

Maintain a scholarly tone while including:

Theoretical foundations and research support for the claims. When citing specific works, produce a real, non-hallucinated reference list after each nugget expansion.

Practical implications with concrete examples only where appropriate.

Nuanced considerations of the concept's complexity, including possible objections and need for empirical research. 

Clear connections to both cognitive science and educational practice.

Smooth transitions that maintain coherence with preceding and following sections

Expand nuggets one by one, treating each as a standalone section while ensuring logical flow between sections. Balance theoretical depth with practical relevance for educators, students, and institutions navigating writing instruction in an AI-augmented landscape. Wait for the user to request each next nugget expansion. Start each nugget expansion with an appropriate subtitle.

Nuggets

1. Form-substance discrimination represents a capacity to separate rhetorical presentation (sentence structure, vocabulary, organization) from intellectual content (quality of ideas, logical consistency, evidential foundation), a skill whose importance has magnified exponentially as AI generates increasingly fluent text that may mask shallow or nonsensical content.
2. The traditional correlation between writing quality and cognitive effort has been fundamentally severed by AI, creating "fluent emptiness" where writing sounds authoritative while masking shallow content, transforming what was once a specialized academic skill into an essential literacy requirement for all readers.
3. Cognitive science reveals humans possess an inherent "processing fluency bias" that equates textual smoothness with validity and value, as evidenced by studies showing identical essays in legible handwriting receive more favorable evaluations than messy counterparts, creating a vulnerability that AI text generation specifically exploits.
4. Effective FSD requires inhibitory control—the cognitive ability to suppress automatic positive responses to fluent text—paralleling the Stroop task where identifying ink color requires inhibiting automatic reading, creating essential evaluative space between perception and judgment of written content.
5. The developmental trajectory of FSD progresses from "surface credibility bias" (equating quality with mechanical correctness) through structured analytical strategies (conceptual mapping, propositional paraphrasing) toward "cognitive automaticity" where readers intuitively sense intellectual substance without conscious methodological application.
6. Critical thinking and FSD intersect in analytical practices that prioritize logos (logical reasoning) over ethos (perceived authority) and pathos (emotional appeal), particularly crucial for evaluating machine-generated content that mimics authoritative tone without possessing genuine expertise.
7. The "bullshit detection" framework, based on Frankfurt's philosophical distinction between lying (deliberately stating falsehoods) and "bullshitting" (speaking without concern for truth), provides empirical connections to FSD, revealing analytical reasoning and skeptical disposition predict resistance to pseudo-profound content.
8. Institutional implementation of FSD requires comprehensive curricular transformation as traditional assignments face potential "extinction" in a landscape where students can generate conventional forms with minimal intellectual engagement, necessitating authentic assessment mirroring real-world intellectual work.
9. Effective FSD pedagogy requires "perceptual retraining" through comparative analysis of "disguised pairs"—conceptually identical texts with divergent form-substance relationships—developing students' sensitivity to distinction between rhetorical sophistication and intellectual depth.
10. The pedagogical strategy of "sloppy jotting" liberates students from formal constraints during ideation, embracing messy thinking and error-filled brainstorming that frees cognitive resources for substantive exploration while creating psychological distance facilitating objective evaluation.
11. Students can be trained to recognize "algorithmic fingerprints" in AI-generated text, including lexical preferences (delve, tapestry, symphony, intricate, nuanced), excessive hedging expressions, unnaturally balanced perspectives, and absence of idiosyncratic viewpoints, developing "algorithmic skepticism" as distinct critical literacy.
12. The "rich prompt technique" for AI integration positions technology as writing assistant while ensuring intellectual substance comes from students, who learn to gauge necessary knowledge density by witnessing how vague AI instructions produce sophisticated-sounding but substantively empty content.
13. Assessment frameworks require fundamental recalibration to explicitly privilege intellectual substance over formal perfection, with rubrics de-emphasizing formerly foundational skills rendered less relevant by AI while ensuring linguistic diversity is respected rather than penalized.
14. FSD serves as "epistemic self-defense"—equipping individuals to maintain intellectual sovereignty amid synthetic persuasion, detecting content optimized for impression rather than insight, safeguarding the fundamental value of authentic thought in knowledge construction and communication.
15. The contemporary significance of FSD extends beyond academic contexts to civic participation, as citizens navigate information ecosystems where influence increasingly derives from control over content generation rather than commitment to truth, making this literacy essential for democratic functioning.





Saturday, February 22, 2025

On Techno-Utopianism: Elon Musk and the Soul of Education

The recent video of Elon Musk promising AI teachers reveals a common misunderstanding among technology leaders. They see education primarily as information transfer and skills training, where an infinitely patient AI system delivers perfectly tailored content to each student. This viewpoint ignores the fundamental nature of education as a relational institution.

Since Gutenberg's invention of the printing press, motivated individuals could teach themselves almost anything. Libraries contain more knowledge than any single teacher. Yet most people do not turn into autodidacts. Why is that? The question is not how to make knowledge more accessible, but why people choose to engage with it.

Teachers generate reasons to learn through two main approaches. In more constructivist settings, they inspire curiosity and create engaging problems to solve. In more traditional schools, they maintain authority and discipline. In most schools, there is a mixture of both. Both methods work because they establish a social framework for learning. A good teacher knows when to push and when to comfort, when to explain and when to let students struggle.

The comparison of AI to Einstein as a teacher misses the point. Teaching requires different qualities than scientific genius - the capacity to enter a relationship, to create meaningful connections, and to help students discover their own reasons for learning. An AI system, no matter how knowledgeable, cannot do any of that.

Students often study not because they find the subject inherently fascinating, but because they respect their teacher, want to belong to a learning community, or seek to fulfill social expectations. Even negative motivations like fear of disappointing others have a distinctly human character.

The techno-utopian vision reduces learning to information exchanges and skill assessments. This mechanistic view fails to account for the social and emotional dimensions of human development. While AI can enhance teaching by handling routine tasks, it cannot replace the essential human relationships that drive educational engagement. The future of education lies not in perfecting content delivery algorithms, but in strengthening the relational foundations of learning. 

Such overblown promises about AI in education do more harm than good. They create unnecessary anxiety among teachers and administrators, leading to resistance against even modest technological improvements. Instead of addressing real challenges in education - student engagement, equitable access, and meaningful assessment - institutions get distracted by unrealistic visions of AI-driven transformation. We need a more balanced approach that recognizes both the potential and limitations of AI in supporting, not replacing, the fundamentally human enterprise of education.



Tuesday, February 4, 2025

Augmented Problem Finding: The Next Frontier in AI Literacy

In my recent blog on task decomposition as a key AI skill, I highlighted how breaking down complex problems enables effective human-AI collaboration. Yet before we can decompose a task, we must identify which problems are worth pursuing - a skill that takes on new dimensions in the age of AI.

This ability to recognize solvable problems expands dramatically with AI tools at our disposal. Tasks once considered too time-consuming or complex suddenly become manageable. The cognitive offloading that AI enables does not just help us solve existing problems - it fundamentally reshapes our understanding of what constitutes a tractable challenge.

Consider how VisiCalc transformed financial planning in the early 1980s. Initially seen as a mere automation tool for accountants, it revolutionized business planning by enabling instant scenario analysis. Tasks that would have consumed days of manual recalculation became instantaneous, allowing professionals to explore multiple strategic options and ask "what if" questions they would not have contemplated before. Similarly, AI prompts us to reconsider which intellectual tasks we should undertake. Writing a comprehensive literature review might have once consumed months; with AI assistance, scholars can now contemplate more ambitious syntheses of knowledge.

This expanded problem space creates its own paradox. As more tasks become technically feasible, the challenge shifts to identifying which ones merit attention. The skill resembles what cognitive psychologists call "problem finding," but with an important twist. Traditional problem finding focuses on identifying gaps or needs. Augmented problem finding requires understanding both human and AI capabilities to recognize opportunities in this enlarged cognitive landscape.

The distinction becomes clear in professional settings. Experienced AI users develop an intuitive sense of which tasks to delegate and which to tackle themselves. They recognize when a seemingly straightforward request actually requires careful human oversight, or when an apparently complex task might yield to well-structured AI assistance. This judgment develops through experience but could be taught more systematically.

The implications extend beyond individual productivity. Organizations must now cultivate this capacity across their workforce. The competitive advantage increasingly lies not in having access to AI tools - these are becoming ubiquitous - but in identifying novel applications for them. This explains why some organizations extract more value from AI than others, despite using similar technologies.

Teaching augmented problem finding requires a different approach from traditional problem-solving instruction. Students need exposure to varied scenarios where AI capabilities interact with human judgment. They must learn to recognize patterns in successful AI applications while developing realistic expectations about AI limitations. Most importantly, they need practice in identifying opportunities that emerge from combining human and machine capabilities in novel ways.

The skill also has ethical dimensions. Not every task that can be automated should be. Augmented problem finding includes judging when human involvement adds necessary value, even at the cost of efficiency. It requires balancing the technical feasibility of AI solutions against broader organizational and societal impacts.

As AI capabilities evolve, this skill will become increasingly crucial. The future belongs not to those who can best use AI tools, but to those who can best identify opportunities for their application. This suggests a shift in how we think about AI literacy - from focusing on technical proficiency to developing sophisticated judgment about when and how to engage AI capabilities.

The automation paradox that Lisanne Bainbridge identified in her 1983 analysis of industrial systems points to an interesting future. As we become more adept at augmented problem finding, we discover new challenges that merit attention. This creates a virtuous cycle of innovation, where each advance in AI capability opens new frontiers for human creativity and judgment.

Perhaps most intriguingly, this skill might represent a distinctly human advantage in the age of AI. While machines excel at solving well-defined problems, the ability to identify worthy challenges remains a uniquely human capability. By developing our capacity for augmented problem finding, we ensure a meaningful role for human judgment in an increasingly automated world.



Saturday, February 1, 2025

Task Decomposition, a core AI skill

The effective use of artificial intelligence depends on our ability to structure problems in ways that align with both human and machine capabilities. While AI demonstrates remarkable computational abilities, its effectiveness relies on carefully structured input and systematic oversight. This suggests that our focus should shift toward understanding how to break down complex tasks into components that leverage the respective strengths of humans and machines.

Task decomposition - the practice of breaking larger problems into manageable parts - predates AI but takes on new significance in this context. Research in expertise studies shows that experienced problem-solvers often approach complex challenges by identifying distinct components and their relationships. This natural human tendency provides a framework for thinking about AI collaboration: we need to recognize which aspects of a task benefit from computational processing and which require human judgment.

The interaction between human users and AI systems appears to follow certain patterns. Those who use AI effectively tend to approach it as a collaborative tool rather than a complete solution. They typically work through multiple iterations: breaking down the problem, testing AI responses, evaluating results, and adjusting their approach. This mirrors established practices in other domains where experts regularly refine their solutions through systematic trial and error.

Consider the task of writing a research paper. Rather than requesting a complete document from AI, a more effective approach involves breaking down the process: developing an outline, gathering relevant sources, analyzing specific arguments, and integrating various perspectives. Similarly, in data analysis, success often comes from methodically defining questions, selecting appropriate datasets, using AI for initial pattern recognition, and applying human expertise to interpret the findings.
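
To make the contrast concrete, here is a minimal sketch of what decomposition can look like when scripted: several narrow, reviewable requests instead of one "write my paper" prompt. It assumes the OpenAI Python SDK; the stage prompts are illustrative, and the human checkpoints between calls are the point.

```python
# Sketch of task decomposition: several narrow, reviewable AI calls instead of
# one monolithic request. Assumes the OpenAI Python SDK; prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str, model: str = "gpt-4o") -> str:
    reply = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return reply.choices[0].message.content


topic = "your research question here"  # placeholder supplied by the student
outline = ask(f"Propose a five-section outline for a research paper on {topic}.")
# Human checkpoint: the writer reviews and edits the outline before going further.
evidence = ask(f"For each section of this outline, list the evidence it would need:\n{outline}")
counters = ask(f"List the strongest counterarguments to this outline:\n{outline}")
# Drafting, source checking, and integrating perspectives remain with the human writer.
print(outline, evidence, counters, sep="\n\n")
```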

This collaborative approach serves two purposes. First, it helps manage complexity by distributing cognitive effort across human and machine resources. Second, it maintains human oversight of the process while benefiting from AI's computational capabilities. The goal is not to automate thinking but to enhance it through structured collaboration.

Current educational practices have not yet fully adapted to this reality. While many institutions offer technical training in AI or discuss its ethical implications, fewer focus on teaching systematic approaches to human-AI collaboration. Students need explicit instruction in how to break down complex tasks and document their decision-making processes when working with AI tools.

To address this gap, educational programs could incorporate several key elements:

  1. Practice in systematic task analysis and decomposition
  2. Training in structured approaches to AI interaction
  3. Documentation of decision-making processes in AI-assisted work
  4. Critical evaluation of AI outputs and limitations
  5. Integration of human expertise with AI capabilities

The emergence of AI tools prompts us to examine our own cognitive processes more explicitly. As we learn to structure problems for AI collaboration, we also develop a clearer understanding of our own problem-solving approaches. This suggests that learning to work effectively with AI involves not just technical skills but also enhanced metacognition - thinking about our own thinking.

The future of human-AI collaboration likely depends less on technological advancement and more on our ability to develop systematic approaches to task decomposition. By focusing on this fundamental skill, we can work toward more effective integration of human and machine capabilities while maintaining the critical role of human judgment and oversight.

These observations and suggestions should be treated as starting points for further investigation rather than definitive conclusions. As we gather more evidence about effective human-AI collaboration, our understanding of task decomposition and its role in this process will likely evolve. The key is to maintain a balanced approach that recognizes both the potential and limitations of AI while developing structured methods for its effective use. 




Wednesday, December 4, 2024

Why We Undervalue Ideas and Overvalue Writing

A student submits a paper that fails to impress stylistically yet approaches a worn topic from an angle no one has tried before. The grade lands at B minus, and the student learns to be less original next time. This pattern reveals a deep bias in higher education: ideas lose to writing every time.

This bias carries serious equity implications. Students from disadvantaged backgrounds, including first-generation college students, English language learners, and those from under-resourced schools, often arrive with rich intellectual perspectives but struggle with academic writing conventions. Their ideas - shaped by unique life experiences and cultural viewpoints - get buried under red ink marking grammatical errors and awkward transitions. We systematically undervalue their intellectual contributions simply because they do not arrive in standard academic packaging.

Polished academic prose renders judgments easy. Evaluators find comfort in assessing grammatical correctness, citation formats, and paragraph transitions. The quality of ideas brings discomfort - they defy easy measurement and often challenge established thinking. When ideas come wrapped in awkward prose, they face near-automatic devaluation.

AI writing tools expose this bias with new clarity. These tools excel at producing acceptable academic prose - the mechanical aspect we overvalue. Yet in generating truly original ideas, AI remains remarkably limited. AI can refine expression but cannot match the depth of human insight, creativity, and lived experience. This technological limitation actually highlights where human creativity becomes most valuable.

This bias shapes student behavior in troubling ways. Rather than exploring new intellectual territory, students learn to package conventional thoughts in pristine prose. The real work of scholarship - generating and testing ideas - takes second place to mastering academic style guides. We have created a system that rewards intellectual safety over creative risk, while systematically disadvantaging students whose mastery of academic conventions does not match their intellectual capacity.

Changing this pattern requires uncomfortable shifts in how we teach and evaluate. What if we graded papers first without looking at the writing quality? What if we asked students to submit rough drafts full of half-formed ideas before cleaning up their prose? What if we saw AI tools as writing assistants that free humans to focus on what they do best - generating original insights and making unexpected connections?

The rise of AI makes this shift urgent. When machines can generate polished prose on demand, continuing to favor writing craft over ideation becomes indefensible. We must learn to value and develop what remains uniquely human - the ability to think in truly original ways, to see patterns others miss, to imagine what has never existed. The future belongs not to the best writers but to the most creative thinkers, and our educational practices must evolve to reflect this reality while ensuring all students can fully contribute their intellectual gifts. 

Thursday, October 10, 2024

Is the college essay dead?

The college essay, once a revered academic exercise, is now facing an existential crisis. It used to be a good tool—a structured way for students to demonstrate their understanding, showcase their critical thinking, and express ideas with clarity. The college essay was not merely about content; it was a skill-building process, teaching students to organize thoughts, develop arguments, and refine language. Yet today, AI has made the traditional essay feel outdated, as it can generate polished, formulaic essays effortlessly. Policing AI use in these assignments is nearly impossible, and the conventional essay’s value is rapidly diminishing.

Not all essays are created equal, however, and the future of the college essay might depend on the type of skills we emphasize. The expository essay, designed to see if students understand material or can apply concepts, is on its last legs. When AI can churn out a satisfactory response in seconds, it is a clear sign that this form of assessment is no longer viable. The AI does not just pass these assignments; it excels at them, raising an uncomfortable question—if a machine can do it, why are we still teaching it? For these kinds of essays, the challenge is that they often assess recall rather than thinking. They were already on shaky ground; AI is just the final push. 

The essays that may survive, though, are those that demand novelty, creativity, and genuine problem-solving. AI may help in drafting, structuring, or even generating ideas, but it does not replace the kind of original thinking needed to solve real-world problems. It cannot fully simulate human intuition, lived experience, or deep critical evaluation. AI's writing is wooden and often devoid of true beauty. Essays that require students to synthesize information in new ways, explore original ideas, exhibit artistic talent, or reflect deeply on personal experiences still have value. These essays are not about whether you know a theory; they are about what you can do with it. This is where the human element—the messy, unpredictable spark of creativity—remains irreplaceable.

The deeper issue is not AI itself but the way we have been teaching and valuing writing. For decades, the emphasis has been on producing “correct” essays—structured, grammatically precise, and obedient to the format. We have been training students to write well enough to meet requirements, not to push the boundaries of their creativity. It is like teaching students to be proficient typists when what we really need are novelists or inventors. We have confused competency with originality, thinking that writing formulaic content is a necessary step before producing meaningful work. This is a misunderstanding of how creativity works; mastery does not come from repetition of the mundane but from risk-taking and exploration, even if that means stumbling along the way.

The real future of the essay should start with this recognition. Imagine if instead of book reports or basic expository pieces, students were challenged to write for real audiences—to draft scientific papers for journals, craft poems for literary contests, or propose solutions to pressing social issues. Sure, many students would not reach the publication stage, but the act of aiming higher would teach them infinitely more about the writing process, and more importantly, about thinking itself. This would not just be about mastering the mechanics of writing but developing a mindset of curiosity and originality. AI could still play a role in these processes, helping with the technicalities, leaving the student free to focus on developing and articulating novel ideas.   

The problem with the book report or the “explain Theory A” essay is not just that they are boring; it is that they are irrelevant. Nobody in the professional world is paid to summarize books or explain theories in isolation. These are stepping stones that lead nowhere. Excelling at a pointless, terrible genre does not prepare students to succeed at an authentic one. Instead of teaching students to write these antiquated forms, we should ask them to write pieces that demand something more—something they cannot copy-paste or generate easily with a prompt. Authentic, context-rich, and creative assignments are the ones that will endure. If there is no expectation of novelty or problem-solving, the essay format becomes an exercise in futility.

AI’s rise does not have to spell the end of the essay. It might, in fact, be the nudge needed to reinvent it. We have the chance to move beyond teaching “correct” writing toward cultivating insightful, original work that challenges the boundaries of what students can do. AI’s presence forces us to ask hard questions about what we want students to learn. If writing is no longer about mechanics or regurgitating content but about generating ideas and engaging critically, then AI becomes a collaborator, not a competitor. It can help with the structure, but the essence—the thinking—must come from the student.

In the end, the college essay is not dead; it is just in need of reinvention. The conventional model of essays as rote demonstrations of knowledge is no longer viable. But the essays that challenge students to think, create, and solve problems will survive. They might even thrive, as the focus shifts from the mechanics of writing to the art of thinking. The key is to evolve our teaching methods and expectations, making room for a new kind of writing that leverages AI without losing the human touch. Raising expectations is the main strategy in dealing with AI in education.



Monday, September 23, 2024

Cognitive Offloading: Learning more by doing less

In the AI-rich environment, educators and learners alike are grappling with a seeming paradox: how can we enhance cognitive growth by doing less? The answer lies in the concept of cognitive offloading, a phenomenon that is gaining increasing attention in cognitive science and educational circles.

Cognitive offloading, as defined by Risko and Gilbert (2016) in their seminal paper "Cognitive Offloading," is "the use of physical action to alter the information processing requirements of a task so as to reduce cognitive demand." In other words, it is about leveraging external tools and resources to ease the mental burden of cognitive tasks.

Some educators mistakenly believe that any cognitive effort is beneficial for growth and development. However, this perspective overlooks the crucial role of cognitive offloading in effective learning. As Risko and Gilbert point out, "Offloading cognition helps us to overcome such capacity limitations, minimize computational effort, and achieve cognitive feats that would not otherwise be possible."

The ability to effectively offload cognitive tasks has always been important for human cognition. Throughout history, we've developed tools and strategies to extend our mental capabilities, from simple note-taking to complex computational devices. However, the advent of AI has made this skill more crucial than ever before.

With AI, we are not just offloading simple calculations or memory tasks; we are potentially shifting complex analytical and creative processes to these powerful tools. This new landscape requires a sophisticated understanding of AI capabilities and limitations. More importantly, it demands the ability to strategically split tasks into elements that can be offloaded to AI and those that require human cognition.

This skill - the ability to effectively partition cognitive tasks between human and AI - is becoming a key challenge for contemporary pedagogy. It is not just about using AI as a tool, but about understanding how to integrate AI into our cognitive processes in a way that enhances rather than replaces human thinking.

As Risko and Gilbert note, "the propensity to offload cognition is influenced by the internal cognitive demands that would otherwise be necessary." In the context of AI, this means learners need to develop a nuanced understanding of when AI can reduce cognitive load in beneficial ways, and when human cognition is irreplaceable.

For educators, this presents both a challenge and an opportunity. The challenge lies in teaching students not just how to use AI tools, but how to think about using them. This involves developing metacognitive skills that allow students to analyze tasks, assess AI capabilities, and make strategic decisions about cognitive offloading.

The opportunity, however, is immense. By embracing cognitive offloading and teaching students how to effectively leverage AI, we can potentially unlock new levels of human cognitive performance. We are not just making learning easier; we are expanding the boundaries of what is learnable.

It is crucial to recognize the value of cognitive offloading and develop sophisticated strategies for its use. The paradox of doing less to learn more is not just a quirk of our technological age; it is a key to unlocking human potential in a world of ever-increasing complexity. The true measure of intelligence in the AI era may well be the ability to know when to think for ourselves, and when to let AI do the thinking for us. 

Saturday, September 7, 2024

AI in Education Research: Are We Asking the Right Questions?

A recent preprint titled "Generative AI Can Harm Learning" has attracted significant attention in education and technology circles. The study, conducted by researchers from the University of Pennsylvania, examines the impact of GPT-4 based AI tutors on high school students' math performance. While the research is well-designed and executed, its premise and conclusions deserve closer scrutiny.

The study finds that students who had access to a standard GPT-4 interface (GPT Base) performed significantly better on practice problems, but when that access was removed, they actually performed worse on exams compared to students who never had AI assistance. Interestingly, students who used a specially designed AI tutor with learning safeguards (GPT Tutor) performed similarly to the control group on exams. While these results are intriguing, we need to take a step back and consider the broader implications.

The researchers should be commended for tackling an important topic. As AI becomes more prevalent in education, understanding its effects on learning is crucial. The study's methodology appears sound, with a good sample size and appropriate controls. However, the conclusions drawn from the results may be somewhat misleading.

Consider an analogy: Imagine a study that taught one group of students to use calculators for arithmetic, while another group learned traditional pencil-and-paper methods. If you then tested both groups without calculators, of course the calculator-trained group would likely perform worse. But does this mean calculators "harm learning"? Or does it simply mean we are testing the wrong skills?

The real question we should be asking is: Are we preparing students for a world without AI assistance, or a world where AI is ubiquitous? Just as we do not expect most adults to perform complex calculations without digital aids, we may need to reconsider what math skills are truly essential in an AI-augmented world.

The study's focus on performance in traditional, unassisted exams may be missing the point. What would be far more interesting is an examination of how AI tutoring affects higher-level math reasoning, problem-solving strategies, or conceptual understanding. These skills are likely to remain relevant even in a world where AI can handle routine calculations and problem-solving.

Moreover, the study's title, "Generative AI Can Harm Learning," may be overstating the case. What the study really shows is that reliance on standard AI interfaces without developing underlying skills can lead to poor performance when that AI is unavailable. However, it also demonstrates that carefully designed AI tutoring systems can potentially mitigate these negative effects. This nuanced finding highlights the importance of thoughtful AI integration in educational settings.

While this study provides valuable data and raises important questions, we should be cautious about interpreting its results too broadly. Instead of seeing AI as a potential harm to learning, we might ask how we can best integrate AI tools into education to enhance deeper understanding and problem-solving skills. The goal should be to prepare students for a future where AI is a ubiquitous tool, not to protect them from it.

As we continue to explore the intersection of AI and education, studies like this one are crucial. However, we must ensure that our research questions and methodologies evolve along with the technology landscape. Only then can we truly understand how to harness AI's potential to enhance, rather than hinder, learning.


Monday, August 19, 2024

The Right to Leapfrog: Redefining Educational Equity in the Age of AI

AI’s potential in education is clear, particularly in how it can assist students who struggle with traditional learning methods. It is broadly accepted that AI can help bridge gaps in cognitive skills, whether due to dyslexia, ADHD, or other neurodiverse conditions. Yet, the utility of AI should not be confined to specific diagnoses. Insights from decades of implementing the Response to Intervention (RTI) framework reveal that regardless of the underlying cause—be it neurodiversity, trauma, or socioeconomic factors—the type of support needed by struggling students remains remarkably consistent. If AI can aid students with reading difficulties, why not extend its benefits to others facing different but equally challenging obstacles? Equity demands that AI’s advantages be made accessible to all who need them, regardless of the origin of their challenges.

This brings us to a deeper issue: the rigid and often unjust link between procedural and conceptual knowledge. Traditionally, lower-level skills like spelling, grammar, and arithmetic have been treated as prerequisites for advancing to higher-order thinking. The prevailing notion is that one must first master these basics before moving on to creativity, critical thinking, or original thought. However, this linear progression is more a product of tradition than necessity. AI now offers us the chance to reconsider this approach. Students should have the right to leapfrog over certain lower-level skills directly into higher-order cognitive functions, bypassing unnecessary barriers.

Predictably, this notion encounters resistance. Rooted in the Protestant work ethic is the belief that one must toil through the basics before earning the right to engage in more sophisticated intellectual activities. This ethic, which equates hard work on mundane tasks with moral worth, is deeply ingrained in our educational systems. However, in an age where AI can handle many of these lower-level tasks, this mindset seems increasingly obsolete. Insisting that all students must follow the same sequence of skills before advancing to higher-order thinking is not just misguided; it is a relic of a bygone era. If AI enables students to engage meaningfully with complex ideas and creative thinking from the start, we should embrace that opportunity rather than constrain it with outdated dogma.

The implications of this shift are significant. If we recognize the right to leapfrog over certain skills, we must also acknowledge that traditional educational hierarchies need to be re-examined. Skills like spelling and grammar, while valuable, should no longer be gatekeepers for students who excel in critical thinking and creativity but struggle with procedural details. AI offers a way to reimagine educational equity, allowing students to focus on their strengths rather than being held back by their weaknesses. Rather than forcing everyone to climb the same cognitive ladder, we can enable each student to leap to the level that aligns with their abilities, creating a more personalized and equitable educational experience.

This rethinking of educational equity challenges deeply rooted assumptions. The belief that hard work on the basics is necessary for higher-level achievement is pervasive, but it is not supported by evidence. In reality, cognitive development is driven more by engagement with complex ideas than by rote mastery of procedural skills. AI provides the tools to focus on these higher-order skills earlier in a student’s education, without the traditional prerequisite of mastering lower-order tasks.

Moreover, the concept of “deskilling” is not new. Throughout history, humanity has continually adapted to technological advances, acquiring new skills while allowing others to fade into obscurity. Today, few people can track animals or make shoes from animal skin—skills that were once essential for survival. Even the ability to harness a horse, once a common necessity, is now a rare skill. While some may lament these losses, they are also a reminder that as society evolves, so too must our educational priorities. Just as technological advancements have rendered certain skills obsolete, AI is reshaping the skills that are most relevant today.

As we move forward, educators must rethink how learning experiences are designed. Rather than viewing AI as merely a tool for accommodating deficits, we should see it as a means of expanding possibilities for all students. By enabling learners to bypass certain skills that are no longer essential in an AI-driven world, we can better align education with the demands of the 21st century. This is about acknowledging that the path to learning does not have to be the same for everyone. In a world where AI can democratize access to higher-level cognitive tasks, the right to leapfrog is not just a possibility—it is a necessity for equitable education. 


Thursday, August 8, 2024

The Cognitive Leap Theory

With the arrival of AI, education is experiencing a profound shift, one that requires a rethinking of how we design and implement learning activities. This shift is captured in the cognitive leap theory, which posits that AI is not just an add-on to traditional education but a transformative force that redefines the learning process itself. The Cognitive Leap theory is a core part of a larger AI-positive pedagogy framework.

Traditionally, educational activities have been structured around the original or revised Bloom’s Taxonomy, a framework that organizes cognitive skills from basic recall of facts (Remember) to higher-order skills like Evaluation and Creation. While Bloom’s pyramid was often interpreted as a sequential progression, Bloom himself never insisted on a strict hierarchy. In fact, with the integration of AI into the classroom, the importance of these skills is being rebalanced. The higher-order skills, particularly those involving critical evaluation, are gaining prominence in ways that were previously unimaginable.

In an AI-positive pedagogical approach, the focus shifts from merely applying and analyzing information—tasks typically associated with mid-level cognitive engagement—to critically evaluating and improving AI-generated outputs. This represents a significant cognitive leap. Instead of simply completing tasks, students are now challenged to scrutinize AI outputs for accuracy, bias, and effectiveness in communication. This shift not only fosters deeper cognitive engagement but also prepares students to navigate the complex landscape of AI-driven information.

A key component of this approach is the development of meta-AI skills. These skills encompass the ability to formulate effective (rich) inquiries or prompts for AI, to inject original ideas into these prompts, and, crucially, to critically assess the AI’s responses. This assessment is not a one-time task but part of an iterative loop where students evaluate, re-prompt, and refine until the output meets a high standard of quality. This process not only sharpens their analytical skills but also enhances their creative abilities, as they learn to think critically about the inputs and outputs of AI systems.
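What this iterative loop might look like in practice can be sketched in a few lines of code. The sketch below is illustrative only: ask_ai and evaluate are hypothetical callables standing in for whatever model and rubric a class actually uses, not real APIs, and the thresholds are arbitrary.

```python
# A minimal sketch of the evaluate / re-prompt / refine loop described above.
# `ask_ai` and `evaluate` are hypothetical stand-ins: the first calls whatever
# model the class uses, the second scores a draft against the rubric (0.0-1.0)
# and returns a written critique. Neither is a real library API.

def refine_with_ai(task, ask_ai, evaluate, quality_bar=0.9, max_rounds=5):
    prompt = task                        # the student's initial, idea-rich prompt
    best_draft, best_score = None, 0.0

    for _ in range(max_rounds):
        draft = ask_ai(prompt)           # get an AI draft
        score, critique = evaluate(draft)  # the student (or rubric) judges it

        if score > best_score:
            best_draft, best_score = draft, score

        if score >= quality_bar:         # good enough: stop iterating
            break

        # Re-prompt: fold the critique and the student's own ideas back in.
        prompt = (
            f"{task}\n\nPrevious draft:\n{draft}\n\n"
            f"Critique to address:\n{critique}\n\n"
            "Revise the draft to fix these problems."
        )

    return best_draft, best_score
```

The point of the sketch is not the code but the division of labor: the evaluate step, where the student applies judgment about accuracy, bias, and quality, is the part of the loop that cannot be offloaded.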

Moreover, the traditional view that learning progresses linearly through Bloom’s Taxonomy is being upended. In the AI-enhanced classroom, evaluation and creation are no longer the endpoints of learning but are increasingly becoming the starting points. Students must begin by evaluating AI-generated content and then proceed to improve it, a process that requires a deep understanding of context, an awareness of potential biases, and the ability to communicate effectively. This reordering of cognitive priorities is at the heart of the cognitive leap theory, which emphasizes that the future of education lies in teaching students not just to perform tasks but to engage in higher-order thinking at every stage of the learning process.

The implications of this shift are serious. Educators must rethink how they design assignments, moving away from traditional task-based assessments toward activities that challenge students to evaluate and improve upon AI-generated outputs. This requires a new kind of pedagogy, one that is flexible, iterative, and deeply engaged with the possibilities and limitations of AI.

By reimagining the role of higher-order thinking skills and emphasizing the critical evaluation of AI outputs, we can prepare students for a future where cognitive engagement is more important than ever. This is not just about adapting to new technology; it is about transforming the way we think about learning itself. 


Monday, July 15, 2024

Effort in Learning: The Good, the Bad, and the AI Advantage

Many educators argue that AI makes learning too easy, suggesting that students need to apply effort to truly learn. This perspective, however, confuses the notion of effort with the process of learning itself. The belief that every kind of effort leads to learning overlooks a significant aspect of cognitive psychology: the nature and impact of cognitive load.

Cognitive load theory, developed by John Sweller, offers a crucial framework for understanding how students learn. It posits that the human brain has a limited capacity for processing information. Sweller distinguished between three types of cognitive load: intrinsic, extraneous, and germane. Intrinsic cognitive load is inherent to the task itself. For instance, solving a complex mathematical problem has a high intrinsic load due to the complexity of the content. Germane cognitive load, on the other hand, refers to the mental resources devoted to processing, construction, and automation of schemas, which are structures that help solve problems within a specific domain. 

The most problematic, however, is extraneous cognitive load. This type of load is not related to the task itself but to the way information is presented or to the extraneous demands placed on learners. High extraneous cognitive load can distract students and stunt learning, making it harder for them to engage meaningfully with the material. For example, a poorly designed textbook that requires constant cross-referencing can add unnecessary cognitive load, detracting from the student's ability to learn. A terrible lecture or a busy-work assignment does the same. If you think that every effort by a student is valuable, you are a hazer, not a teacher.

The challenge, therefore, is not to eliminate all effort but to ensure that the effort students exert is directed towards productive ends. In other words, we need to reduce extraneous cognitive load and increase germane cognitive load. The true aim is to leverage AI to enhance germane cognitive load, directly aiding in the acquisition of schemas necessary for solving discipline-specific problems.

Every academic discipline has core problems that students are expected to solve by the end of their programs. The first step is to mercilessly strip the language of learning outcomes of wishy-washy jargon and focus on these fundamental problems. By identifying these top-level problems, educators can better understand the sequences of skills and knowledge students need to acquire.

Once these core problems are identified, it is crucial to examine how professionals in the field solve them. This involves a detailed analysis of the mental schemas that experts use. Schemas are cognitive structures that allow individuals to organize and interpret information. They enable professionals to recognize patterns, make decisions, and solve problems efficiently. For example, a doctor has schemas for diagnosing illnesses based on symptoms and test results, while an engineer has schemas for designing structures that withstand specific stresses. It is also important to understand whether professionals in the field already solve these problems with AI, or will be doing so soon.

AI can play a pivotal role in helping students develop these schemas. These technologies can identify where a student is struggling and provide targeted support, ensuring that cognitive resources are directed towards germane learning activities rather than being wasted on extraneous tasks.

To achieve this, we need to revisit the basic principles of instructional design. While these principles remain fundamentally the same, they require new thinking in light of AI capabilities. Instructional design should focus on reducing extraneous cognitive load by simplifying the learning environment and minimizing distractions. Simultaneously, it should increase germane cognitive load by providing challenging and meaningful tasks that promote the construction of schemas.

Moreover, educators need to recognize where cognitive load is not useful and should focus exclusively on the germane kind. This might mean redesigning courses to incorporate AI tools that can automate routine tasks, provide instant feedback, and offer complex, real-world problems for students to solve. Such an approach ensures that students are engaged in deep, meaningful learning activities rather than busywork.

Ad summam, the integration of AI in education is not about making learning easier in a superficial sense. It is about making learning more effective by ensuring that students' cognitive resources are directed towards activities that genuinely promote understanding and skill acquisition. By focusing on germane cognitive load and leveraging AI to support instructional design, we can create learning environments that foster deep, meaningful learning and prepare students to solve the complex problems of their disciplines. This calls for a rigorous rethinking of educational practices and a commitment to harnessing AI's potential to enhance, rather than hinder, the learning process.

