Thursday, March 26, 2026

The Trouble with Refusal Rights: On the CCCC Resolution to Refuse Generative AI

The Conference on College Composition and Communication recently passed a resolution affirming "the rights of students and teachers to refuse to sign up for, prompt, or otherwise use generative AI in the writing classroom." The resolution draws on a companion document, "Refusing GenAI in Writing Studies: A Quickstart Guide" by Jennifer Sano-Franchini, Megan McIntyre, and Maggie Fernandes. I find much of what they say about the political economy of Big Tech persuasive. Yet the central move of casting the argument in the language of rights is a strategic and conceptual misstep.

When we talk about rights, we usually mean one of two things: legal rights, which are enforceable protections backed by law or contract, or moral rights, which are claims about what people are owed regardless of what the law says. The resolution tries to invoke both registers and achieves neither.

The resolution leans on the AAUP's 1940 Statement on Academic Freedom. That document does establish a real professional norm: faculty can select course materials, determine approaches, and assess student work without administrative veto. But notice what this actually protects. It protects faculty autonomy in making curricular decisions. It does not create a specific right to refuse any particular technology. When an instructor decides that a given tool does not serve her course, she exercises normal pedagogical judgment. No one calls that a right. It is simply teaching.

Academic freedom is symmetric. It protects choices, not specific outcomes. A resolution affirming one particular choice as a right tilts the field. It implies that using AI requires justification while refusing it does not.

There is a deeper problem with invoking the AAUP framework here. The AAUP has been explicit that academic freedom in teaching operates on two levels, and the collective level takes precedence. As the AAUP's own FAQ states, the shared academic freedom of a faculty to determine courses and materials "supersedes the freedom of an individual faculty member to choose a textbook that he or she alone prefers." The AAUP's statement on Freedom and Responsibility reinforces this: it is improper for an instructor to fail to present subject matter "as approved by the faculty in their collective responsibility for the curriculum." This is why we have curriculum committees and course approval processes. Individual instructors teach within a collectively sanctioned framework.

The CCCC resolution quietly reverses this logic. It asserts an individual right to refuse a specific technology, independent of any collective deliberation about what a writing curriculum should contain. If we establish the precedent that an individual instructor has a right to refuse AI on principled grounds, what else can an instructor refuse? The learning management system, on grounds that it embodies corporate surveillance? Plagiarism detection software, which the Quickstart Guide itself criticizes? Peer review platforms? Email? Each of these technologies carries ideological baggage. The line between principled refusal and personal preference becomes impossible to draw once you frame the question as one of individual rights rather than collective curricular judgment.

Reading the resolution, I kept returning to a simple question: What specific violation of rights does it aim to prevent? Who is forcing writing teachers to use ChatGPT?

The AAUP's 2025 survey found that 15 percent of faculty said their institution mandates AI use. But the same survey found that 81 percent must use learning management systems with embedded AI features. The "mandate" is mostly that Canvas or Google Workspace now has AI baked in. That is not the same as being told you must assign AI-assisted essays. I cannot find evidence of any American college or university requiring writing faculty to incorporate generative AI into their pedagogy. To the extent that real pressure exists, it takes the form of institutional nudging, not directives that could be resisted by invoking a right.

Rights are most powerful when they address a concrete threat. The right to free speech protects against government censorship. The right to due process protects against arbitrary punishment. What does the right to refuse generative AI protect against? Against being encouraged to try something? Against the zeitgeist? Rights are a heavy instrument. They should be reserved for heavy problems.

Buried inside the resolution is a pedagogical claim: that writing instruction develops human thought and expression, and that outsourcing parts of the writing process to a language model may undermine that development. The claim is debatable but plausible. However, it does not require the language of rights. It requires the language of curriculum and evidence. If generative AI undermines learning outcomes in first-year composition, instructors should not use it. Not because they have a right to refuse, but because it does not serve their students. By framing refusal as a right rather than a pedagogical judgment, the resolution removes the question from the domain where it should be argued and places it in the domain where it can only be asserted. You do not argue against a right. You respect it or you violate it. This forecloses the very inquiry the authors claim to value.

The resolution also affirms a student right to refuse AI. This is even more peculiar. Students are routinely required to use technologies they did not choose: a particular LMS, plagiarism detection services, specific software. No one frames these requirements as rights violations. A course has requirements. An instructor sets them. If the instructor has determined that AI engagement is central to the course, allowing opt-outs means running two parallel courses. If AI is not central, the point is moot.

The real concerns here deserve a better framework. Shared governance matters: if institutions sign contracts with AI companies without consulting faculty, that is a governance problem already addressed by existing norms. Pedagogical autonomy already exists: faculty can and do decide what technologies to use. The evidence question is primary and researchable. And the labor and environmental concerns, while legitimate, are matters of institutional procurement and social policy, not classroom pedagogy. An instructor who refuses ChatGPT on environmental grounds should, by the same logic, refuse Zoom, Canvas, and university email, all of which depend on data centers with significant environmental footprints.

There is a final paradox. The language of rights is supposed to project strength, but here it signals the opposite. When a profession asks for special protections against a new technology, it announces that it cannot figure out, through its own expertise and collective deliberation, how to respond to a changed environment. Math departments did not need a right to refuse calculators. They debated, experimented, and made curricular decisions. Some banned calculators from exams, some required them, and the question was always pedagogical: Does this tool help students learn, or does it let them bypass the thinking? Writing studies has every intellectual resource it needs to conduct the same kind of inquiry. Framing the matter as a right suggests otherwise. It suggests a profession that feels so besieged it must retreat behind quasi-legal barricades rather than do what professions do: deliberate, adapt, and lead.




