
Wednesday, October 29, 2025

Beating the Robot Is the Point (and the Pedagogy)

A pivotal moment in any course involving artificial intelligence comes when students try to beat the robot, and succeed. I do not mean cheating the system or outsmarting the instructor. I mean learning how to identify something AI does poorly, and then doing it better.

Many students, especially undergraduates, approach AI with exaggerated reverence. They assume the output is authoritative and final. AI writes with confidence, speed, and often impressive fluency. The effect is almost hypnotic. This creates a psychological barrier: if the machine does it well, what is left for me to do? Am I smart enough to compete? 

This assumption is wrong, but not irrational. It takes cognitive effort to move beyond awe toward critique. The breakthrough moment occurs when a student notices a flaw. Sometimes it is a factual error, but more often it is a subtle lack: a missing argument, weak nuance, robotic phrasing, or flat tone. Students realize, for the first time, that the AI is not a better version of themselves. It is something different. It is stronger in language processing but weaker in creativity, authenticity, judgment, insight, or affect.

This realization is not theoretical. It is a variant of self-efficacy, but more specific and applied. Classic self-efficacy theory describes the conviction that one is capable of performing a task. What occurs in the classroom with AI is more nuanced. Students do not just believe they can do something. They discover what, exactly, they can do better than the machine. This is a kind of enhanced self-efficacy, focused not on general ability but on identifying one's own unique niche of competence. It is confidence through contrast.

To beat the robot, one must first learn to challenge it. That could mean prompting it more cleverly, iterating multiple drafts, or simply refusing to accept its first answer. Students begin to demand more. They ask, “What is missing here?” or “Can this be said better?” The AI becomes a foil, not a teacher. That shift is vital.

There are students who reach this point quickly. They treat AI as a flawed collaborator and instinctively wrestle with its output. But many do not. For them, scaffolding is necessary. They must be taught how to critique the machine. They must be shown examples of mediocre AI-generated work and invited to improve it. This is not a lesson about ethics or plagiarism. It is a lesson about confidence.

Cognitive load helps explain why some students freeze in front of AI. The interface appears simple, but the mental task is complex: reading critically (AI is often verbose), prompting strategically, evaluating output, and iterating, all while managing anxiety about technology. The extraneous load is high, especially for those who are not fluent writers. But once a student identifies one specific area where they outperform the machine, such as tone, logic, or detail, they begin to reclaim agency. That is the learning goal. I sometimes explain it to them as moving from the passenger seat to the driver's seat.

This is not the death of authorship. It is its rebirth under new conditions. Authorship now includes orchestration: deciding when and how to use AI, and how to push past its limitations. This is a higher-order skill. It resembles conducting, not composing. But the cognitive work is no less real.

Educators must design for this. Assignments should not simply allow AI use; they should require it. But more importantly, they should require critique of AI. Where is it wrong? Where is it boring? Where does it miss the point? Students should be evaluated not on how well they mimic AI but on how well they improve it.

Some students will resist. A few may never get there. But most will, especially if we frame the challenge not as compliance, but as competition. Beating the robot is possible. In fact, it is the point. It is how students learn to see themselves not as users of tools but as thinkers with judgment. The robot is fast, but it is not wise. That is where we come in.



Thursday, October 23, 2025

AI Doesn’t Kill Learning, and I Can Prove It

There’s a curious misconception floating around, whispered by skeptics, shouted by cynics: that letting students use AI in their coursework flattens the learning curve. That it replaces thinking. That it reduces education to a copy-paste exercise. If everyone has the same tool, the logic goes, outcomes must converge. But that’s not what happens. Not even close.

In three separate university classes, I removed all restrictions on AI use. Not only were students allowed to use large language models, they were taught how to use them well: context input, prompts, revision loops, scaffolding, argument development, the full toolbox. Each group had a customized AI assistant tailored to the course content. Everyone had the same access. Everyone knew the rules. And yet the differences in what they produced were striking.

Some students barely improved. Others soared. The quality of work diverged wildly, not only in polish but in depth, originality, and complexity. It didn’t take long to see what was happening. The AI was a mirror, not a mask. It didn’t hide student ability and effort; it amplified both. Whatever a student brought to the interaction (curiosity, discipline, intellectual courage) determined how far they could go.

This isn’t hypothetical. It’s empirical. I grade every week. I see the evidence.

When given a routine assignment, generative AI can do a decent job. A vanilla college essay? Sure, it’ll pass. But I don’t assign vanilla. One of my standard assignments asks undergraduates to write a paper worthy of publication. Not in a class blog or a campus magazine, but in a real, peer-reviewed publication.

You might think that’s too much to ask. And if the bar is “can a chatbot imitate academic tone and throw citations at a thesis,” then yes, AI can fake it. But a publishable paper requires more than tone. It requires original framing, precise argumentation, contextual awareness, and methodological discipline. No prompt can do all that. It requires a human mind, inexperienced, perhaps, but willing to stretch.

And stretch they do. Some of these students, undergrads and grads, manage to channel the AI into something greater than the sum of its parts. They write drafts with the machine, then rewrite against it. They argue with the chatbot, they question its logic, they override its flattening instincts. They edit not for grammar but for clarity of thought. AI is their writing partner, not their ghostwriter.

This is where it gets interesting. The assumption that AI automates thinking misses the point. In education, AI reveals higher-order thinking. When you push students to do something AI can’t do alone (create, synthesize, critique), the gaps between them start to matter. And those gaps are good. They are evidence of growth.

Variance in output isn’t a failure of the method. It’s the metric of its success. If human participation did not matter, output would not vary. In complex tasks, human input is critical, and that is the only explanation for variance this large.

And in that environment, the signal is clear: where there is variance in performance, there is the possibility of growth. And where there is growth, there is plenty of room for teaching and learning.

Prove me wrong.


Saturday, October 11, 2025

Innovation Doesn’t Need a Faster Engine

The doomsayers of AI are having their moment. They correctly point out that the rapid progress of large language models has slowed. Context windows remain limited, hallucinations persist, and bigger models no longer guarantee smarter ones. From this, they conclude that the age of AI breakthroughs is ending.

They are mistaking the engine for the journey.

History offers many parallels. When the internal combustion engine stopped getting dramatically better, innovation didn’t stop. That was when it really started. The real transformation came from everything built around it: road networks, trucking logistics, suburbs, the global supply chain. Likewise, the shipping container changed the world not through further improvements, but because it became the standard that reshaped ports, labor systems, and trade. When the core technology stabilizes, people finally start reimagining what to do with it.

This is the point we’ve reached with AI. The models are powerful, but most of their potential remains untouched. Businesses are still treating AI as a novelty, something to sprinkle on top of existing processes. Education systems, government workflows, healthcare administration: these still operate as if nothing new has happened. We haven’t even begun to redesign for a world where everyone has a competent digital assistant.

The real question is not whether an AI can pass a medical exam. It’s how we organize diagnosis and care when every doctor has instant access to thousands of case studies. It’s not about whether an AI can draft an email. It’s about how office communication changes when routine writing takes seconds. The innovation now lies in application, not invention.

Limits are not the enemy. In fact, recognizing limits often helps creativity flourish. When designers accept that screen size on phones is fixed, they find smarter interfaces. We become inventive when the boundaries are clear. The same will happen with AI once we stop waiting for miracle upgrades and start asking better questions.

The real bottleneck is attention. Investment still flows heavily into training larger and larger models, chasing diminishing returns. Meanwhile, the tools that would actually change how people work or learn get far less support. It’s as if we are building faster trains while neglecting the tracks, stations, and maps.

There is a similar problem in education, where energy goes into protecting the structure of institutions while ignoring how learning could be improved. Just because we can do something well does not mean it is worth doing. And just because AI researchers can build a bigger model does not mean they should.

The most meaningful innovation is still ahead. It is no longer about raw power, but about redesign. Once we shift our focus from models to uses, the next revolution begins.



Wednesday, October 1, 2025

FPAR: The Cycle That Makes AI Writing Actually Work

Students don’t need help accessing ChatGPT. They need help using it well. What’s missing from most writing instruction right now is not awareness of the tool but a habit: a skill of active, productive engagement with AI that replaces passive, lazy consumption of AI-generated text. They need FPAR: Frame, Prompt, Assess, Revise.

Frame the task before asking for help. This means uploading or pasting in anything that helps the AI understand the assignment. That might be a rough draft, but it could just as easily be a Wikipedia article, a class reading, a news story, a research report, a course syllabus, or even a transcript of a group discussion. Anything that offers context helps the AI respond more intelligently. For a research paper, pasting in a background source (like an article the student is drawing on) can guide the AI to suggest better angles, examples, or questions. Even a confusing assignment prompt becomes more useful when paired with, say, a class chat where the professor explained it. The point is to stop treating AI like a mind reader. The more the student frames the task, the better the result.

Prompt with clarity. Vague questions get vague answers. Instead of saying “Fix this,” students should learn to be specific: “Cut this to 150 words without losing the argument,” or “Rephrase this so it sounds more like a high school student and less like a Wikipedia article.” A direct, useful prompt might be: “Write an intro for my paper; the main idea is that in surveillance programs, the limitations of technology end up creating an implicit policy that is more powerful than the actual law. Programmers may not intend to make policy, but in practice, they do.” If they want more ideas, they should ask for them. If they want structure, examples, tone shifts, or even counterarguments, they need to say so. A good prompt isn’t a wish; it’s a directive.

Assess critically. The most dangerous moment is when the AI gives back something that sounds good. That’s when students tend to relax and stop thinking. But sounding fluent isn’t the same as being insightful. They need to read the response like a skeptic: Did it actually answer the question? Did it preserve the original point? Did it flatten nuance or introduce new assumptions? If the student asked for help making their argument more persuasive, did it just sprinkle in some confident phrases or actually improve the logic? Every AI-generated revision should be interrogated, not accepted.

Revise intentionally. Once they’ve assessed the output, students should guide the next step. They might say, “That example works, but now the paragraph feels too long. Can you trim the setup?” or “Now add a rebuttal to this counterpoint.” Revision is where the conversation starts to get interesting. It’s also where students start to develop judgment, voice, and control, because they’re not just reacting to feedback, they’re directing it.

And then they go back to Frame. The cycle repeats, each time with sharper context, better prompts, more refined questions. FPAR is not just a strategy for using AI; it’s a structure for thinking. It builds habits of iteration, reflection, and specificity. It turns the student from a passive consumer into an active writer.
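For readers who like to see the loop spelled out, here is a minimal sketch of FPAR as code. It is an illustration only, not part of the classroom method: the ask_ai function is a hypothetical stand-in for whatever chat model or API you use, and the Assess step, which in practice is the student’s own critical reading, is reduced to a simple request for feedback.

```python
# A minimal sketch of the FPAR cycle: Frame, Prompt, Assess, Revise.
# Hypothetical: ask_ai() stands in for whatever chat model or API you use,
# and the human Assess step is reduced to a plain input() prompt.

def ask_ai(messages: list[dict]) -> str:
    """Placeholder for a call to a chat model. Wire this to your own client."""
    raise NotImplementedError("Replace with a real LLM call.")

def fpar_loop(context: str, directive: str, max_rounds: int = 3) -> str:
    # Frame: give the model the material it needs, not just the question.
    messages = [
        {"role": "system", "content": "You are a writing assistant."},
        {"role": "user", "content": f"Context for this assignment:\n{context}"},
    ]
    draft = ""
    for _ in range(max_rounds):
        # Prompt: a specific directive, not a vague wish.
        messages.append({"role": "user", "content": directive})
        draft = ask_ai(messages)
        messages.append({"role": "assistant", "content": draft})

        # Assess: the writer reads like a skeptic and names what is missing.
        critique = input("What is missing or wrong? (leave blank to accept): ").strip()
        if not critique:
            break

        # Revise: the critique becomes the next directive, and the cycle
        # returns to Frame with sharper context for the next round.
        directive = f"Revise the previous draft. Specifically: {critique}"
    return draft
```

The point of the sketch is the shape of the cycle: whatever the writer notices in Assess becomes the next directive, which is exactly how the conversation sharpens with each pass.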

Most bad AI writing isn’t the fault of the model. It’s the result of unclear framing, lazy prompting, uncritical acceptance, and shallow revision. The antidote isn’t banning the tool. It’s teaching students how to use it with care. FPAR is how. 


