Students don’t need help accessing ChatGPT. They need help using it well. What’s missing from most writing instruction right now is not awareness of the tool but a habit: the skill of active, productive engagement with AI to replace passive, lazy consumption of AI-generated information. They need FPAR: Frame, Prompt, Assess, Revise.
Frame the task before asking for help. This means uploading or pasting in anything that helps the AI understand the assignment. That might be a rough draft, but it could just as easily be a Wikipedia article, a class reading, a news story, a research report, a course syllabus, or even a transcript of a group discussion. Anything that offers context helps the AI respond more intelligently. For a research paper, pasting in a background source (like an article the student is drawing on) can guide the AI to suggest better angles, examples, or questions. Even a confusing assignment prompt becomes more useful when paired with, say, a class chat where the professor explained it. The point is to stop treating AI like a mind reader. The more the student frames the task, the better the result.
Prompt with clarity. Vague questions get vague answers. Instead of saying “Fix this,” students should learn to be specific: “Cut this to 150 words without losing the argument,” or “Rephrase this so it sounds more like a high school student and less like a Wikipedia article.” A direct, useful prompt might be: “Write an intro for my paper; the main idea is that in surveillance programs, the limitations of technology end up creating an implicit policy that is more powerful than the actual law. Programmers may not intend to make policy, but in practice, they do.” If they want more ideas, they should ask for them. If they want structure, examples, tone shifts, or even counterarguments, they need to say so. A good prompt isn’t a wish; it’s a directive.
Assess critically. The most dangerous moment is when the AI gives back something that sounds good. That’s when students tend to relax and stop thinking. But sounding fluent isn’t the same as being insightful. They need to read the response like a skeptic: Did it actually answer the question? Did it preserve the original point? Did it flatten nuance or introduce new assumptions? If the student asked for help making their argument more persuasive, did it just sprinkle in some confident phrases or actually improve the logic? Every AI-generated revision should be interrogated, never simply accepted.
Revise intentionally. Once they’ve assessed the output, students should guide the next step. They might say, “That example works, but now the paragraph feels too long. Can you trim the setup?” or “Now add a rebuttal to this counterpoint.” Revision is where the conversation starts to get interesting. It’s also where students start to develop judgment, voice, and control, because they’re not just reacting to feedback; they’re directing it.
And then they go back to Frame. The cycle repeats, each time with sharper context, better prompts, more refined questions. FPAR is not just a strategy for using AI; it’s a structure for thinking. It builds habits of iteration, reflection, and specificity. It turns the student from a passive consumer into an active writer.
Most bad AI writing isn’t the fault of the model. It’s the result of unclear framing, lazy prompting, uncritical acceptance, and shallow revision. The antidote isn’t banning the tool. It’s teaching students how to use it with care. FPAR is how.